
    Ubiquitous Emotion Recognition with Multimodal Mobile Interfaces

    In 1997, Rosalind Picard introduced the fundamental concepts of affect recognition. Since then, multimodal interfaces such as brain-computer interfaces (BCIs), RGB and depth cameras, physiological wearables, and combined facial and physiological data streams have been used to study human emotion. Much of the work in this field focuses on a single modality for recognizing emotion, yet a wealth of additional information becomes available when multimodal data are incorporated. Accordingly, the aim of this workshop is to examine current and future research activities and trends in ubiquitous emotion recognition through the fusion of data from various multimodal, mobile devices.

    Comparing the effectiveness of a short-term vertical jump vs. weightlifting program on athletic power development

    Efficient training of neuromuscular power, and the translation of this power to sport-specific tasks, is a key objective in the preparation of athletes involved in team-based sports. The purpose of the current study was to compare changes in center of mass (COM) neuromuscular power and performance of sport-specific tasks following short-term (6-week) training adopting either Olympic-style weightlifting (WL) exercises or vertical jump (VJ) exercises. Twenty-six recreationally active males (18-30 years; height: 178.7±8.3 cm; mass: 78.6±12.2 kg) were randomly allocated to either a WL or VJ training group, and performance during the countermovement jump (CMJ), squat jump (SJ), depth jump (DJ), 20 m sprint, and 5-0-5 agility test was assessed pre- and post-training. Despite the WL group demonstrating larger increases in peak power output during the CMJ (WL group: 10% increase, d=0.701; VJ group: 5.78% increase, d=0.328) and SJ (WL group: 12.73% increase, d=0.854; VJ group: 7.27% increase, d=0.382), no significant between-group differences were observed in any outcome measure studied. There was a significant main effect of time for the three vertical jumps (CMJ, SJ, DJ), the 0-5 m and 0-20 m sprint times, and the 5-0-5 agility test time, all of which improved following training (all main effects of time p<0.01). Irrespective of the training approach adopted by coaches or athletes, the addition of either WL or VJ training for power development can improve performance in tasks associated with team-based sports, even in athletes with limited pre-season training periods.
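    The abstract above reports standardized effect sizes (Cohen's d) alongside percentage changes. As a point of reference, a minimal sketch of one common way to compute Cohen's d for a pre/post comparison is shown below; the paper does not state which variant of the formula was used, and the input numbers here are purely illustrative, not taken from the study.

```python
import math

def cohens_d(mean_pre, mean_post, sd_pre, sd_post):
    """Cohen's d using the pooled standard deviation of the two
    measurement occasions (one common convention; the study's exact
    formula is an assumption here)."""
    pooled_sd = math.sqrt((sd_pre ** 2 + sd_post ** 2) / 2.0)
    return (mean_post - mean_pre) / pooled_sd

# Illustrative numbers only (W), not values from the study:
d = cohens_d(mean_pre=4000.0, mean_post=4400.0, sd_pre=560.0, sd_post=580.0)
```

    By convention, d around 0.2 is read as a small effect, 0.5 as medium, and 0.8 as large, which is why the reported CMJ effect for the WL group (d=0.701) is described as larger than the VJ group's (d=0.328).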

    Face Recognition by Multi-Frame Fusion of Rotating Heads in Videos

    This paper presents a face recognition study that implicitly utilizes the 3D information in 2D video sequences through multi-sample fusion. The approach is based on the hypothesis that continuous and coherent intensity variations in video frames caused by a rotating head can provide information similar to that of explicit shapes or range images. The fusion was done at the image level to prevent information loss. Experiments were carried out using a data set of over 100 subjects, and promising results were obtained: (1) under regular indoor lighting conditions, the rank-one recognition rate increased from 91% using a single frame to 100% using 7-frame fusion; (2) under strong shadow conditions, the rank-one recognition rate increased from 63% using a single frame to 85% using 7-frame fusion.
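    The abstract describes image-level fusion of several frames followed by rank-one identification. A minimal sketch of that general idea is below, assuming the simplest scheme: concatenating aligned frames into one feature vector and matching by nearest neighbour. The function names and the fusion scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fuse_frames(frames):
    """Image-level fusion sketch: flatten k aligned face frames and
    concatenate them into one long feature vector, so no per-frame
    information is discarded before matching. Normalized to unit
    length so matching is insensitive to overall intensity scale."""
    vecs = [np.asarray(f, dtype=np.float64).ravel() for f in frames]
    fused = np.concatenate(vecs)
    return fused / np.linalg.norm(fused)

def rank_one_match(probe, gallery):
    """Rank-one identification: index of the gallery template with
    the smallest Euclidean distance to the probe."""
    dists = [np.linalg.norm(probe - g) for g in gallery]
    return int(np.argmin(dists))
```

    In this sketch, a "7-frame fusion" probe would simply pass seven frames to fuse_frames; the study's reported gains suggest the extra frames capture pose-induced intensity variation that a single frame cannot.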

    A Multi-Gesture Interaction System Using a 3D Iris Disk Model for Gaze Estimation and an Active Appearance Model for 3D Hand Pointing

    In this paper, we present a vision-based human–computer interaction system, which integrates control components using multiple gestures, including eye gaze, head pose, hand pointing, and mouth motions. To track head, eye, and mouth movements, we present a two-camera system that detects the face from a fixed, wide-angle camera, estimates a rough location for the eye region using an eye detector based on topographic features, and directs another active pan-tilt-zoom camera to focus in on this eye region. We also propose a novel eye gaze estimation approach for point-of-regard (POR) tracking on a viewing screen. To allow for greater head pose freedom, we developed a new calibration approach to find the 3-D eyeball location, eyeball radius, and fovea position. Moreover, in order to obtain the optical axis, we create a 3-D iris disk by mapping both the iris center and iris contour points to the eyeball sphere. We then rotate the fovea accordingly and compute the final, visual-axis gaze direction. This part of the system permits natural, non-intrusive, pose-invariant POR estimation from a distance without resorting to infrared or complex hardware setups. We also propose and integrate a two-camera hand pointing estimation algorithm for hand gesture tracking in 3-D from a distance. The algorithms for gaze pointing and hand finger pointing are evaluated individually, and the feasibility of the entire system is validated through two interactive information visualization applications. Index Terms—Gaze estimation, hand tracking, human–computer interaction (HCI).
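    The abstract's gaze pipeline rests on two geometric steps: the optical axis runs from the eyeball centre through the 3-D iris centre, and the visual axis deviates from it by a small per-subject angle (often called kappa). A minimal sketch of that geometry is below; the variable names and the simplification of applying kappa as a single rotation about the vertical axis are assumptions for illustration, whereas the paper instead rotates the calibrated fovea position on the eyeball sphere.

```python
import numpy as np

def optical_axis(eyeball_center, iris_center_3d):
    """Optical axis: unit vector from the eyeball centre through the
    3-D iris centre (both points assumed in the same camera frame)."""
    v = np.asarray(iris_center_3d, float) - np.asarray(eyeball_center, float)
    return v / np.linalg.norm(v)

def visual_axis(optical, kappa_deg=5.0):
    """Approximate visual axis: rotate the optical axis by the
    angular offset kappa about the vertical (y) axis. A single fixed
    rotation is a simplification of the paper's fovea-based method."""
    k = np.radians(kappa_deg)
    rot_y = np.array([[np.cos(k), 0.0, np.sin(k)],
                      [0.0,       1.0, 0.0      ],
                      [-np.sin(k), 0.0, np.cos(k)]])
    return rot_y @ np.asarray(optical, float)
```

    Intersecting the resulting visual-axis ray with the screen plane then yields the point of regard; the calibration step described in the abstract is what supplies the eyeball centre, radius, and fovea (hence kappa) per user.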